# WebLI pretraining

All of the models below are SigLIP or SigLIP 2 (Sigmoid Loss for Language-Image Pre-training) vision-language models pretrained on the WebLI dataset. They replace the usual softmax contrastive loss with a pairwise sigmoid loss and support zero-shot image classification and image-text retrieval; the `I18n` checkpoint is the multilingual variant.

| Model | License | Task | Source | Downloads | Likes |
|---|---|---|---|---|---|
| Vit SO400M 16 SigLIP2 512 | Apache-2.0 | Text-to-Image | timm | 1,191 | 4 |
| Vit SO400M 16 SigLIP2 384 | Apache-2.0 | Text-to-Image | timm | 106.30k | 2 |
| Vit SO400M 16 SigLIP2 256 | Apache-2.0 | Text-to-Image | timm | 998 | 0 |
| Vit L 16 SigLIP2 512 | Apache-2.0 | Text-to-Image | timm | 147 | 2 |
| Vit L 16 SigLIP2 256 | Apache-2.0 | Text-to-Image | timm | 888 | 0 |
| Vit B 32 SigLIP2 256 | Apache-2.0 | Text-to-Image | timm | 691 | 0 |
| Vit B 16 SigLIP2 256 | Apache-2.0 | Text-to-Image | timm | 10.32k | 4 |
| Siglip Base Patch16 512 | Apache-2.0 | Text-to-Image (Transformers) | google | 237.79k | 24 |
| Vit B 16 SigLIP I18n 256 | Apache-2.0 | Text-to-Image | timm | 87.92k | 3 |
| Vit SO400M 14 SigLIP | Apache-2.0 | Text-to-Image | timm | 79.55k | 17 |
| Vit L 16 SigLIP 256 | Apache-2.0 | Text-to-Image | timm | 1,516 | 1 |
| Vit B 16 SigLIP 384 | Apache-2.0 | Text-to-Image | timm | 4,119 | 4 |
| Vit B 16 SigLIP 256 | Apache-2.0 | Text-to-Image | timm | 17.15k | 1 |
| Siglip Base Patch16 224 | Apache-2.0 | Image-to-Text (Transformers) | google | 250.28k | 43 |
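The "Sigmoid Loss" in these model names refers to SigLIP's core training objective: each image-text pair in a batch is scored independently with a sigmoid, rather than normalized against the whole batch with a softmax. A minimal NumPy sketch of that loss, assuming the published SigLIP formulation (the temperature `t` and bias `b` are learnable scalars in the real models; the values here are only illustrative):

```python
import numpy as np

def siglip_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """Pairwise sigmoid loss over a batch of image/text embeddings.

    In SigLIP, t (temperature) and b (bias) are learned; fixed
    illustrative values are used here.
    """
    # L2-normalize so the dot product is a cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = t * img @ txt.T + b          # (n, n) pairwise logits
    n = img.shape[0]
    z = 2.0 * np.eye(n) - 1.0             # +1 for matched pairs, -1 otherwise
    # -log sigmoid(z * logit) == log(1 + exp(-z * logit)); logaddexp is the
    # numerically stable way to compute it.
    return np.sum(np.logaddexp(0.0, -z * logits)) / n

# Matched embeddings score far lower than mismatched ones:
e = np.eye(4)
aligned = siglip_loss(e, e)                        # every pair matched
shuffled = siglip_loss(e, np.roll(e, 1, axis=0))   # every pair mismatched
```

Because every pair contributes an independent binary term, the loss decouples from batch-level normalization, which is what lets SigLIP scale to very large batches.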